POV-Ray : Newsgroups : povray.general : Status of Moray? : Re: New SDL for POVRay
  Re: New SDL for POVRay  
From: John VanSickle
Date: 13 Oct 2007 20:37:16
Message: <471164bc@news.povray.org>
Patrick Elliott wrote:
> Only problem I could see with that is that someone may want to use 
> multiple cameras, like say... stereoscopic effects. I mean, it's not 
> totally absurd to consider someone wanting to either a) render both eyes 
> at the same time, if they have the speed and the language allowed it, or 
> even doing something crazier, like having a camera that "looks" at some 
> other location, which is used as part of the image some place else. A 
> good example would be something like a security booth. You have a 
> monitor on the desk, which shows a kind of post-process "filtered" 
> effect in black and white, of what is "seen" by a camera that is also 
> *visible* in the larger scene, but in color, as is most of the rest of 
> what is in the camera view.
> 
> Yeah, I know, you can do two renders, one from the camera view, then 
> post-process that, then use the result as an image map on the monitor, 
> but this is just an example. It's possible that someone could have a 
> scene where this was either seriously impractical, or completely 
> impossible, unless the "camera view" was being produced in the same 
> render as the final image. Then again, what do I know. lol

So which is better:

A) Letting the users simply render two sets of images, with one being 
used as a texture in another; this is very easy, requiring only two 
scenes that are almost identical (the camera is positioned differently 
in the camera view), or

B) Writing the renderer so that a texture can be based on the camera 
view (with rays being traced, anti-aliased, etc.) of another place in 
the scene.

We already have A up and running just fine.  I use it all the time. 
Leaving out the trouble of render-wrangling a second set of images 
(i.e., putting them in a different directory after rendering), A is no 
more work than B for the user; the camera still has to be positioned and 
test-rendered for either method.
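For anyone who hasn't used method A, the second render is applied with an 
ordinary image_map pigment. A minimal sketch (the filename, screen 
dimensions, and placement below are all illustrative, not from any 
particular scene):

  // Assumes the camera-view render was saved as "booth_view.png"
  // and post-processed to black and white beforehand.
  #declare Monitor_Screen =
    box { <0, 0, 0>, <1, 1, 0.01>   // image_map covers the unit square in x-y
      pigment { image_map { png "booth_view.png" once } }
      finish { ambient 1 diffuse 0 } // self-lit, like a CRT screen
      scale <0.8, 0.6, 1>            // stretch the unit square to screen size
      translate <2, 1.2, 5>          // place it on the desk
    }

The only scene-file difference between the two renders is the camera block 
and this object, which is why the two scenes can share almost everything 
via #include.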

B will require work from programmers who are already busy with tasks 
that can't be pushed off onto the user, or can be only at the expense of 
significantly reducing the user's productivity.  It also increases the 
size of the renderer, in order to provide the extra feature.

Now if the memory situation is very tight, then method B does have the 
advantage that the texture does not need to be stored in memory; but in 
almost every case of this kind the video image occupies less than a 
quarter of the final view, so it can be rendered at a correspondingly 
lower resolution and the texture requires only a fraction of the memory 
of the final image.  If your memory situation is that tight, maybe you 
need to get some more RAM.

Regards,
John

